Our aim is to automatically create conda packages from PyPI. I wrote a small, independent script that uses conda skeleton and conda-build to create conda packages automatically. I now think I should abandon that approach and instead use the infrastructure conda already provides to complete the task; for example, it already has tools to query PyPI, access package indexes, look up versions, and so on.
Mirror PyPI on the EC2 instance: this will give us faster access to the repositories and PyPI data. We can use one of the following tools to do the job:
- Bandersnatch
- DevPi
- PEP 381 client
- pypi-mirror by OpenStack
- The Binstar docs also have a page named “Mirroring PyPI”, but I’m not sure what it does.
- Some of these packages also provide tools to automatically sync the local mirror with PyPI, which can be very helpful (a minimal sync sketch follows this list).
- “Theoretically” we should be able to build all the pure-Python packages; non-pure-Python packages can be a problem. I plan to create a dependency graph of the packages, which should ideally be a tree. Then we only need to classify the leaf packages as pure Python or not. This way we can create a list of packages we can “aim” to build automatically. This will also be important when we cannot build some important dependency, because we will already know which packages in the build process will be affected (a rough sketch of this idea is below, after the mirroring sketch).
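For the mirroring idea, something like the following could keep a Bandersnatch mirror in sync. This is only a sketch, assuming Bandersnatch is installed and already configured at /etc/bandersnatch.conf; in practice a cron entry running `bandersnatch mirror` is probably simpler, and the exact flags and paths should be checked against the Bandersnatch docs.

```python
# Sketch of periodically syncing a local Bandersnatch mirror with PyPI.
# Assumes bandersnatch is installed and its config (mirror directory,
# upstream master, etc.) lives at /etc/bandersnatch.conf (assumed path);
# in production this would more likely be a cron job.
import subprocess
import time

CONFIG = "/etc/bandersnatch.conf"   # assumed config path
SYNC_INTERVAL = 6 * 60 * 60         # re-sync every 6 hours

while True:
    try:
        # Pull new/changed packages from PyPI into the local mirror.
        subprocess.check_call(["bandersnatch", "-c", CONFIG, "mirror"])
    except subprocess.CalledProcessError as err:
        print("mirror sync failed: %s" % err)
    time.sleep(SYNC_INTERVAL)
```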
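And a rough sketch of the dependency-graph idea, assuming we can obtain a dependency list per package: `get_dependencies` below is a hypothetical helper, since real dependency data would have to come from the PyPI metadata or from the generated recipes, both of which can be incomplete. The pure-Python check is just a heuristic: a package whose latest release publishes a platform-independent (`*-none-any.whl`) wheel is treated as pure Python.

```python
# Sketch of the dependency graph / classification idea.
import json

try:
    from urllib2 import urlopen          # Python 2
except ImportError:
    from urllib.request import urlopen   # Python 3

import networkx as nx


def get_dependencies(pkg):
    """Hypothetical helper: return the PyPI names that `pkg` depends on."""
    raise NotImplementedError


def is_pure_python(pkg):
    """Heuristic: pure Python if the latest release has a *-none-any.whl wheel."""
    url = "https://pypi.python.org/pypi/%s/json" % pkg
    data = json.load(urlopen(url))
    return any(f["filename"].endswith("none-any.whl") for f in data["urls"])


def build_graph(packages):
    """Directed graph where an edge pkg -> dep means "pkg depends on dep"."""
    graph = nx.DiGraph()
    for pkg in packages:
        graph.add_node(pkg)
        for dep in get_dependencies(pkg):
            graph.add_edge(pkg, dep)
    return graph


def buildable_leaves(graph):
    """Leaf packages (no dependencies of their own) that look pure Python."""
    leaves = [n for n in graph.nodes() if graph.out_degree(n) == 0]
    return [n for n in leaves if is_pure_python(n)]


def affected_by(graph, broken_pkg):
    """Every package that directly or indirectly depends on `broken_pkg`."""
    return nx.ancestors(graph, broken_pkg)
```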